In [ ]:
!jupyter nbconvert --to html '/content/face_expression_ (1).ipynb'
[NbConvertApp] Converting notebook /content/face_expression_ (1).ipynb to html
[NbConvertApp] Writing 8650206 bytes to /content/face_expression_ (1).html

github link- https://github.com/levi-lit/Efficient-Net-Model hugging face - https://huggingface.co/levi15

Task Sub Task Comments
Data Preprocessing Scaling and Resizing Done
Image Augmentation Done
Train and test data handled correctly Done
Gaussian Blur, Histogram Equalization and Intensity thresholds Done
Model Trained Training Time 2095.284 seconds
AUC and Confusion Matrix Computed Done
Overfitting/Underfitting checked and handled Done
Empirical Tuning Interpretability Implemented Done
1st Round of Tuning Tried but failed with errors
2nd Round of Tuning Done, using the dropout as 0.3 and learning rate as .001 gives the best result. I could only run 1 epoch because it was taking too long. Running more epochs will help to achieve better performance.
Done, implemented L2 regularization and with the same set of dropout and learning rate we get the best result. Similar issue faces, so ran only 1 epoch.
Deployment Using Streamlit for deployment use ngrok and hugging face,Created app.py, index.html, requirements.txt but having an error regarding model configuration
In [1]:
import tensorflow as tf
import os
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import random
import warnings
warnings.filterwarnings("ignore")
In [ ]:
 
In [2]:
for dirpath, dirnames, filenames in os.walk("/home/archie/Music/ml2/FaceExpressions/dataset"):
  print(f"There are {len(filenames)} images in '{dirpath}'.")

visualizing the images¶

In [3]:
def view_random_images(target_class, num_images=8):
  target_dir="/home/archie/Music/ml2/FaceExpressions/dataset"
  target_folder = os.path.join(target_dir, target_class)

  random_images = random.sample(os.listdir(target_folder), num_images)

  # Create a subplot to display multiple images
  plt.figure(figsize=(12, 6))
  for i, image in enumerate(random_images):
    plt.subplot(2, 4, i + 1)  # 2 rows, 4 columns
    img = mpimg.imread(os.path.join(target_folder, image))
    plt.imshow(img)
    plt.title(f"{target_class} - {i + 1}")
    plt.axis("off")

  plt.show()
In [ ]:
view_random_images("Angry",8)
In [ ]:
view_random_images("Surprise",8)

Data preprocessing¶

In [ ]:
data_dir='/home/archie/Music/ml2/FaceExpressions/dataset'
IMG_SIZE = (224, 224) # image size
total_data= tf.keras.preprocessing.image_dataset_from_directory(directory=data_dir,
                                                                            image_size=IMG_SIZE,
                                                                            label_mode="categorical",
                                                                            batch_size=32)
In [ ]:
total_samples = total_data.cardinality().numpy()

train_size = int(0.8 * total_samples)
test_size = total_samples - train_size

# Split the dataset
train_data = total_data.take(train_size)
test_data = total_data.skip(train_size)

print("Number of samples in training set:", train_size)
print("Number of samples in testing set:", test_size)
Number of samples in training set: 386
Number of samples in testing set: 97
In [ ]:
total_data.class_names
Out[ ]:
['Ahegao', 'Angry', 'Happy', 'Neutral', 'Sad', 'Surprise']

Creating Callbacks¶

  • Tensorbord callback
In [ ]:
checkpoint_path="Expresion/checkpoint.ckpt"
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                          save_weights_only=True,
                                                          save_best_only=True,
                                                          save_freq="epoch",
                                                          verbose=1)
In [ ]:
import datetime
def create_tensorboard_callback(dir_name, experiment_name):
  log_dir = dir_name + "/" + experiment_name + "/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
  tensorboard_callback = tf.keras.callbacks.TensorBoard(
      log_dir=log_dir
  )
  print(f"Saving TensorBoard log files to: {log_dir}")
  return tensorboard_callback

Mixed_precision¶

  • speedup_the_training upto 3x
In [ ]:
from tensorflow.keras import mixed_precision
mixed_precision.set_global_policy('mixed_float16')

Data augmentaion sequencial layer¶

In [ ]:
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
data_augmentation = Sequential([
  layers.RandomFlip("horizontal"),
  layers.RandomRotation(0.2),
  layers.RandomZoom(0.2),
  layers.RandomHeight(0.2),
  layers.RandomWidth(0.2),
], name ="data_augmentation")

First Model¶

In [ ]:
from tensorflow import keras
from tensorflow.keras import layers
In [ ]:
# Setup input shape and base model, freezing the base model layers
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB7(include_top=False)
base_model.trainable = False

# Create input layer
inputs = layers.Input(shape=input_shape, name="input_layer")

# Give base_model the inputs (after augmentation) and don't train it
x = base_model(inputs, training=False)

# Pool output features of the base model
x = layers.GlobalAveragePooling2D(name="global_average_pooling_layer")(x)

# Put a dense layer on as the output
outputs = layers.Dense(6, activation="softmax",dtype='float32', name="output_layer")(x)
#dtype='float32' because we are using mixpresion

# Make a model using the inputs and outputs
model_1 = keras.Model(inputs, outputs)
Downloading data from https://storage.googleapis.com/keras-applications/efficientnetb7_notop.h5
258076736/258076736 [==============================] - 6s 0us/step
In [ ]:
from tensorflow.keras.utils import plot_model
plot_model(model_1)
Out[ ]:

I tried but Augmentation Layer do not increse accuracy despite it decrease in my case,so i removed it

In [ ]:
#complile the model
model_1.compile(loss=tf.keras.losses.categorical_crossentropy,
                optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                metrics=['accuracy'])
model_1.summary()
Model: "model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_layer (InputLayer)    [(None, 224, 224, 3)]     0         
                                                                 
 efficientnetb7 (Functional  (None, None, None, 2560   64097687  
 )                           )                                   
                                                                 
 global_average_pooling_lay  (None, 2560)              0         
 er (GlobalAveragePooling2D                                      
 )                                                               
                                                                 
 output_layer (Dense)        (None, 6)                 15366     
                                                                 
=================================================================
Total params: 64113053 (244.57 MB)
Trainable params: 15366 (60.02 KB)
Non-trainable params: 64097687 (244.51 MB)
_________________________________________________________________
In [ ]:
history_model_1_efficientB7 = model_1.fit(train_data,
                                           steps_per_epoch=len(train_data),
                                           epochs=10,
                                           batch_size=32,
                                           validation_data=test_data,
                                           validation_steps=len(test_data),
                                           callbacks=[checkpoint_callback]
                                       )
Epoch 1/10
386/386 [==============================] - ETA: 0s - loss: 1.0474 - accuracy: 0.5924
Epoch 1: val_loss improved from inf to 0.91243, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 382s 926ms/step - loss: 1.0474 - accuracy: 0.5924 - val_loss: 0.9124 - val_accuracy: 0.6520
Epoch 2/10
386/386 [==============================] - ETA: 0s - loss: 0.8687 - accuracy: 0.6618
Epoch 2: val_loss improved from 0.91243 to 0.85923, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 121s 311ms/step - loss: 0.8687 - accuracy: 0.6618 - val_loss: 0.8592 - val_accuracy: 0.6678
Epoch 3/10
386/386 [==============================] - ETA: 0s - loss: 0.8100 - accuracy: 0.6862
Epoch 3: val_loss improved from 0.85923 to 0.83798, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 121s 312ms/step - loss: 0.8100 - accuracy: 0.6862 - val_loss: 0.8380 - val_accuracy: 0.6743
Epoch 4/10
386/386 [==============================] - ETA: 0s - loss: 0.7728 - accuracy: 0.7017
Epoch 4: val_loss improved from 0.83798 to 0.81918, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 121s 312ms/step - loss: 0.7728 - accuracy: 0.7017 - val_loss: 0.8192 - val_accuracy: 0.6946
Epoch 5/10
386/386 [==============================] - ETA: 0s - loss: 0.7431 - accuracy: 0.7142
Epoch 5: val_loss improved from 0.81918 to 0.81248, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 121s 313ms/step - loss: 0.7431 - accuracy: 0.7142 - val_loss: 0.8125 - val_accuracy: 0.6933
Epoch 6/10
386/386 [==============================] - ETA: 0s - loss: 0.7223 - accuracy: 0.7239
Epoch 6: val_loss improved from 0.81248 to 0.80536, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 121s 311ms/step - loss: 0.7223 - accuracy: 0.7239 - val_loss: 0.8054 - val_accuracy: 0.6920
Epoch 7/10
386/386 [==============================] - ETA: 0s - loss: 0.7026 - accuracy: 0.7336
Epoch 7: val_loss improved from 0.80536 to 0.79928, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 121s 311ms/step - loss: 0.7026 - accuracy: 0.7336 - val_loss: 0.7993 - val_accuracy: 0.6956
Epoch 8/10
386/386 [==============================] - ETA: 0s - loss: 0.6853 - accuracy: 0.7373
Epoch 8: val_loss improved from 0.79928 to 0.79185, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 121s 311ms/step - loss: 0.6853 - accuracy: 0.7373 - val_loss: 0.7918 - val_accuracy: 0.7001
Epoch 9/10
386/386 [==============================] - ETA: 0s - loss: 0.6673 - accuracy: 0.7439
Epoch 9: val_loss improved from 0.79185 to 0.78342, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 120s 311ms/step - loss: 0.6673 - accuracy: 0.7439 - val_loss: 0.7834 - val_accuracy: 0.7101
Epoch 10/10
386/386 [==============================] - ETA: 0s - loss: 0.6567 - accuracy: 0.7457
Epoch 10: val_loss improved from 0.78342 to 0.77691, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 127s 329ms/step - loss: 0.6567 - accuracy: 0.7457 - val_loss: 0.7769 - val_accuracy: 0.7020
In [ ]:
# Plot the validation and training data separately
def plot_loss_curves(history):
  """
  Returns separate loss curves for training and validation metrics.
  """
  loss = history.history['loss']
  val_loss = history.history['val_loss']

  accuracy = history.history['accuracy']
  val_accuracy = history.history['val_accuracy']

  epochs = range(len(history.history['loss']))

  # Plot loss
  plt.plot(epochs, loss, label='training_loss')
  plt.plot(epochs, val_loss, label='val_loss')
  plt.title('Loss')
  plt.xlabel('Epochs')
  plt.legend()

  # Plot accuracy
  plt.figure()
  plt.plot(epochs, accuracy, label='training_accuracy')
  plt.plot(epochs, val_accuracy, label='val_accuracy')
  plt.title('Accuracy')
  plt.xlabel('Epochs')
  plt.legend();
In [ ]:
plot_loss_curves(history_model_1_efficientB7)

Now ploting ROC curve for all the class 6

In [ ]:
from sklearn.metrics import roc_curve, auc
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
import numpy as np
# First, get the true labels and predictions
y_true = np.concatenate([y for x, y in test_data], axis=0)
y_pred = model_1.predict(test_data)

y_true = np.argmax(y_true, axis=1)

# Calculate ROC curve and ROC area for each class
n_classes = 6 # Number of classes
fpr = dict()
tpr = dict()
roc_auc = dict()

for i in range(n_classes):
    fpr[i], tpr[i], _ = roc_curve(to_categorical(y_true, num_classes=n_classes)[:, i], y_pred[:, i])
    roc_auc[i] = auc(fpr[i], tpr[i])

# Plot of a ROC curve for a specific class
for i in range(n_classes):
    plt.figure()
    plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i])
    plt.plot([0, 1], [0, 1], 'k--')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver operating characteristic example')
    plt.legend(loc="lower right")
    plt.show()
97/97 [==============================] - 44s 181ms/step

Lets fine tune our model by unfreezing some of the layer¶

In [ ]:
model_1.load_weights(checkpoint_path)
Out[ ]:
<tensorflow.python.checkpoint.checkpoint.CheckpointLoadStatus at 0x78a926f802b0>
In [ ]:
model_1.evaluate(test_data)
97/97 [==============================] - 45s 186ms/step - loss: 0.7860 - accuracy: 0.6988
Out[ ]:
[0.7860016226768494, 0.6988068222999573]
In [ ]:
# Are these layers trainable?
for layer in model_1.layers:
  print(layer, layer.trainable)
<keras.src.engine.input_layer.InputLayer object at 0x78adbcf852d0> True
<keras.src.engine.functional.Functional object at 0x78ada4e08b20> False
<keras.src.layers.pooling.global_average_pooling2d.GlobalAveragePooling2D object at 0x78ae4aafca90> True
<keras.src.layers.core.dense.Dense object at 0x78ae7ba96350> True
In [ ]:
# Access the base_model layers of model_1
model_1_base_model = model_1.layers[1]
model_1_base_model.name
Out[ ]:
'efficientnetb7'
In [ ]:
# Make all the layers in model_1_base_model trainable
model_1_base_model.trainable = True

# Freeze all layers except for the last 15
for layer in model_1_base_model.layers[:-15]:
  layer.trainable = False

# Recompile (we have to recompile our models every time we make a change)
model_1.compile(loss="categorical_crossentropy",
                optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), # when fine-tuning you typically want to lower the learning rate by 10x*
                metrics=["accuracy"])
print(len(model_1.trainable_variables))
15
In [ ]:
# Fine-tune for another 10 epochs
initial_epoch=10
fine_tune_epochs = initial_epoch + 5

# Refit the model (same as model_1 except with more trainable layers)
history_model_1_efficientB7_fineTune = model_1.fit(train_data,
                                                steps_per_epoch=len(train_data),
                                                epochs=fine_tune_epochs,
                                                batch_size=32,
                                                validation_data=test_data,
                                                validation_steps=len(test_data),
                                                initial_epoch=history_model_1_efficientB7.epoch[-1],# start from the previous last epoch
                                                callbacks=[checkpoint_callback]
                                                           )
Epoch 10/15
386/386 [==============================] - ETA: 0s - loss: 0.6522 - accuracy: 0.7461
Epoch 10: val_loss improved from 0.77691 to 0.73918, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 154s 331ms/step - loss: 0.6522 - accuracy: 0.7461 - val_loss: 0.7392 - val_accuracy: 0.7140
Epoch 11/15
386/386 [==============================] - ETA: 0s - loss: 0.5376 - accuracy: 0.7921
Epoch 11: val_loss improved from 0.73918 to 0.71505, saving model to Expresion/checkpoint.ckpt
386/386 [==============================] - 123s 318ms/step - loss: 0.5376 - accuracy: 0.7921 - val_loss: 0.7150 - val_accuracy: 0.7288
Epoch 12/15
386/386 [==============================] - ETA: 0s - loss: 0.4486 - accuracy: 0.8335
Epoch 12: val_loss did not improve from 0.71505
386/386 [==============================] - 121s 312ms/step - loss: 0.4486 - accuracy: 0.8335 - val_loss: 0.7378 - val_accuracy: 0.7169
Epoch 13/15
386/386 [==============================] - ETA: 0s - loss: 0.3627 - accuracy: 0.8671
Epoch 13: val_loss did not improve from 0.71505
386/386 [==============================] - 121s 312ms/step - loss: 0.3627 - accuracy: 0.8671 - val_loss: 0.7272 - val_accuracy: 0.7417
Epoch 14/15
386/386 [==============================] - ETA: 0s - loss: 0.2904 - accuracy: 0.9000
Epoch 14: val_loss did not improve from 0.71505
386/386 [==============================] - 121s 313ms/step - loss: 0.2904 - accuracy: 0.9000 - val_loss: 0.7788 - val_accuracy: 0.7227
Epoch 15/15
386/386 [==============================] - ETA: 0s - loss: 0.2304 - accuracy: 0.9226
Epoch 15: val_loss did not improve from 0.71505
386/386 [==============================] - 121s 312ms/step - loss: 0.2304 - accuracy: 0.9226 - val_loss: 0.7900 - val_accuracy: 0.7401
In [ ]:
model_1.load_weights(checkpoint_path)
Out[ ]:
<tensorflow.python.checkpoint.checkpoint.CheckpointLoadStatus at 0x78a9290d7610>
In [ ]:
model_1.evaluate(test_data)
97/97 [==============================] - 46s 190ms/step - loss: 0.7168 - accuracy: 0.7275
Out[ ]:
[0.7168375849723816, 0.7275072336196899]

fine tuning¶

In [ ]:
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import numpy as np

y_pred_labels = np.argmax(y_pred, axis=1)  # Convert probabilities to predicted labels

# Compute the confusion matrix
cm = confusion_matrix(y_true, y_pred_labels)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=total_data.class_names)

# Plot the confusion matrix
disp.plot(cmap=plt.cm.Blues)
plt.show()
In [ ]:
from sklearn.metrics import roc_auc_score

# Calculate the ROC-AUC score
roc_auc = roc_auc_score(to_categorical(y_true, num_classes=n_classes), y_pred, multi_class='ovr')

print(f"Average ROC-AUC score: {roc_auc:.2f}")
Average ROC-AUC score: 0.51
In [ ]:
checkpoint_path="BigModel/checkpoint.ckpt"
checkpoint_callback=tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                          save_weights_only=True,
                                                          save_best_only=True,
                                                          save_freq="epoch",
                                                          verbose=1)
In [ ]:
base_model=tf.keras.applications.efficientnet_v2.EfficientNetV2L(include_top=False)

base_model.trainable=False

input=tf.keras.layers.Input(shape=(224,224,3),name="inpute_layer")
#x=tf.keras.layers.experimental.preprocessing.Rescaling(1./255)(input)

# Pass the inputs to the base model
x=base_model(input)
#Average Pool the outputs of the base model(aggregate all the most important information,reduce number of computations)
x=tf.keras.layers.GlobalAveragePooling2D(name="global_avrage_pooling")(x)
# Create the inpute with the activation layer
outputs=tf.keras.layers.Dense(6,activation="softmax",dtype='float32',name="Output_layer")(x)

# Combine the inpute with output in mode
model_2=tf.keras.Model(input,outputs)
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/efficientnet_v2/efficientnetv2-l_notop.h5
473176280/473176280 [==============================] - 12s 0us/step
In [ ]:
#complile the model
model_2.compile(loss=tf.keras.losses.categorical_crossentropy,
                optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                metrics=['accuracy'])
model_2.summary()
Model: "model_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 inpute_layer (InputLayer)   [(None, 224, 224, 3)]     0         
                                                                 
 efficientnetv2-l (Function  (None, None, None, 1280   117746848 
 al)                         )                                   
                                                                 
 global_avrage_pooling (Glo  (None, 1280)              0         
 balAveragePooling2D)                                            
                                                                 
 Output_layer (Dense)        (None, 6)                 7686      
                                                                 
=================================================================
Total params: 117754534 (449.20 MB)
Trainable params: 7686 (30.02 KB)
Non-trainable params: 117746848 (449.17 MB)
_________________________________________________________________
In [ ]:
history_model_2= model_2.fit(train_data,
                            steps_per_epoch=len(train_data),
                            epochs=10,
                            batch_size=32,
                            validation_data=test_data,
                            validation_steps=len(test_data),
                            callbacks=[checkpoint_callback]
                              )
Epoch 1/10
386/386 [==============================] - ETA: 0s - loss: 1.1502 - accuracy: 0.5589
Epoch 1: val_loss improved from inf to 1.00935, saving model to BigModel/checkpoint.ckpt
386/386 [==============================] - 153s 315ms/step - loss: 1.1502 - accuracy: 0.5589 - val_loss: 1.0093 - val_accuracy: 0.6124
Epoch 2/10
386/386 [==============================] - ETA: 0s - loss: 0.9743 - accuracy: 0.6214
Epoch 2: val_loss improved from 1.00935 to 0.94321, saving model to BigModel/checkpoint.ckpt
386/386 [==============================] - 116s 298ms/step - loss: 0.9743 - accuracy: 0.6214 - val_loss: 0.9432 - val_accuracy: 0.6311
Epoch 3/10
386/386 [==============================] - ETA: 0s - loss: 0.9220 - accuracy: 0.6413
Epoch 3: val_loss improved from 0.94321 to 0.91750, saving model to BigModel/checkpoint.ckpt
386/386 [==============================] - 122s 314ms/step - loss: 0.9220 - accuracy: 0.6413 - val_loss: 0.9175 - val_accuracy: 0.6385
Epoch 4/10
386/386 [==============================] - ETA: 0s - loss: 0.8892 - accuracy: 0.6530
Epoch 4: val_loss improved from 0.91750 to 0.90086, saving model to BigModel/checkpoint.ckpt
386/386 [==============================] - 115s 297ms/step - loss: 0.8892 - accuracy: 0.6530 - val_loss: 0.9009 - val_accuracy: 0.6482
Epoch 5/10
386/386 [==============================] - ETA: 0s - loss: 0.8679 - accuracy: 0.6594
Epoch 5: val_loss improved from 0.90086 to 0.88833, saving model to BigModel/checkpoint.ckpt
386/386 [==============================] - 116s 299ms/step - loss: 0.8679 - accuracy: 0.6594 - val_loss: 0.8883 - val_accuracy: 0.6559
Epoch 6/10
386/386 [==============================] - ETA: 0s - loss: 0.8481 - accuracy: 0.6670
Epoch 6: val_loss improved from 0.88833 to 0.87619, saving model to BigModel/checkpoint.ckpt
386/386 [==============================] - 116s 300ms/step - loss: 0.8481 - accuracy: 0.6670 - val_loss: 0.8762 - val_accuracy: 0.6666
Epoch 7/10
386/386 [==============================] - ETA: 0s - loss: 0.8359 - accuracy: 0.6745
Epoch 7: val_loss improved from 0.87619 to 0.86957, saving model to BigModel/checkpoint.ckpt
386/386 [==============================] - 116s 298ms/step - loss: 0.8359 - accuracy: 0.6745 - val_loss: 0.8696 - val_accuracy: 0.6637
Epoch 8/10
386/386 [==============================] - ETA: 0s - loss: 0.8211 - accuracy: 0.6795
Epoch 8: val_loss did not improve from 0.86957
386/386 [==============================] - 113s 291ms/step - loss: 0.8211 - accuracy: 0.6795 - val_loss: 0.8707 - val_accuracy: 0.6643
Epoch 9/10
386/386 [==============================] - ETA: 0s - loss: 0.8143 - accuracy: 0.6820
Epoch 9: val_loss improved from 0.86957 to 0.85921, saving model to BigModel/checkpoint.ckpt
386/386 [==============================] - 115s 297ms/step - loss: 0.8143 - accuracy: 0.6820 - val_loss: 0.8592 - val_accuracy: 0.6753
Epoch 10/10
386/386 [==============================] - ETA: 0s - loss: 0.8045 - accuracy: 0.6900
Epoch 10: val_loss did not improve from 0.85921
386/386 [==============================] - 113s 290ms/step - loss: 0.8045 - accuracy: 0.6900 - val_loss: 0.8608 - val_accuracy: 0.6724
In [ ]:
plot_loss_curves(history_model_2)
In [ ]:
model_2.load_weights(checkpoint_path)
Out[ ]:
<tensorflow.python.checkpoint.checkpoint.CheckpointLoadStatus at 0x78a8ac774100>
In [ ]:
model_save_path = 'efficient-2.h5'
model_2.save(model_save_path)

from tensorflow.keras.models import load_model

# Load the model
model_loaded2 = load_model('efficient-2.h5')
In [ ]:
from tensorflow.keras.models import load_model

# Load the model
model_loaded = load_model('efficient.h5')
WARNING:tensorflow:Mixed precision compatibility check (mixed_float16): WARNING
The dtype policy mixed_float16 may run slowly because this machine does not have a GPU. Only Nvidia GPUs with compute capability of at least 7.0 run quickly with mixed_float16.
If you will use compatible GPU(s) not attached to this host, e.g. by running a multi-worker model, you can ignore this warning. This message will only be logged once
WARNING:tensorflow:Error in loading the saved optimizer state. As a result, your model is starting with a freshly initialized optimizer.

making predictions

In [ ]:
import os
import random
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.efficientnet import preprocess_input
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import load_model

# Base path to your dataset
base_path = '/home/archie/Music/ml2/FaceExpressions/'

# Class names (folders) to match your dataset structure
class_names = ['Ahegao', 'Angry',  'Happy', 'Neutral','Sad', 'Surprise']

# Function to prepare an image
def prepare_image(img_path, img_height=224, img_width=224):
    """Prepares an image for classification by the model."""
    img = image.load_img(img_path, target_size=(img_height, img_width))
    img_array = image.img_to_array(img)
    img_array_expanded_dims = np.expand_dims(img_array, axis=0)
    return preprocess_input(img_array_expanded_dims)

# Select 3 random images from each class and predict
for class_name in class_names:
    folder_path = os.path.join(base_path, 'dataset', class_name)
    images = os.listdir(folder_path)
    selected_images = random.sample(images, 3)

    for img_name in selected_images:
        img_path = os.path.join(folder_path, img_name)

        # Print the path of the current image
        print(f"Image path: {img_path}")

        prepared_img = prepare_image(img_path)

        # Predict the class
        prediction = model_loaded.predict(prepared_img)
        predicted_class = class_names[np.argmax(prediction)]

        # Display the image and prediction
        img = plt.imread(img_path)
        plt.imshow(img)
        plt.title(f"True class: {class_name.capitalize()} \nPredicted class: {predicted_class.capitalize()}")
        plt.axis('off')
        plt.show()
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Ahegao/cropped_emotions.40209~ahegao.png
1/1 [==============================] - 19s 19s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Ahegao/cropped_emotions.16964~ahegao.png
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Ahegao/lol306~ahegao.png
1/1 [==============================] - 5s 5s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Angry/cropped_emotions.231788~angry.png
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Angry/967c722cc68bf384b492ebf7d731086a8e1b3ce226323e18fa779c79~angry.jpg
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Angry/39bed989a5c51775d43c85a9cf25a8b64008eb111d3ec386ad12d497~angry.jpg
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Happy/0c3502c7d27652cebd469863a9f19ace2ffe6c92656dadd4e7287d65.jpg
1/1 [==============================] - 5s 5s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Happy/cropped_emotions.568851.png
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Happy/2c65a273d56c6f366d7bc5d722b32b500b93675acc5990c9070a0b68.jpg
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Neutral/cropped_emotions.452819f.png
1/1 [==============================] - 5s 5s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Neutral/0b668067c6d1e90d81128ce524ddf8064633da6f031507f3f7655115f.jpg
1/1 [==============================] - 5s 5s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Neutral/0c74e4d748038277e98ed0edb4283591743c5cc14130b25ebfe3cdf9f.jpg
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Sad/5a1449e6afe27d23e1f4b6176c8ce4110d4af58700b4193457117cfb.jpg
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Sad/cropped_emotions.499100.png
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Sad/2c29b6319a374b0f67aa4d07bdc1a9e6fb7e83eb8f34707376e7479e.jpg
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Surprise/cropped_emotions.261362~12fffff.png
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Surprise/cropped_emotions.260485~12fffff.png
1/1 [==============================] - 4s 4s/step
Image path: /home/archie/Music/ml2/FaceExpressions/dataset/Surprise/cropped_emotions.266678~12fffff.png
1/1 [==============================] - 3s 3s/step
In [28]:
%%writefile app.py
import streamlit as st
import tensorflow as tf
import streamlit as st


@st.cache(allow_output_mutation=True)
def load_model():
  model=tf.keras.models.load_model('/content/my_model2.hdf5')
  return model
with st.spinner('Model is being loaded..'):
  model=load_model()

st.write("""
         # image Classification
         """
         )

file = st.file_uploader("Please upload an brain scan file", type=["jpg", "png"])
import cv2
from PIL import Image, ImageOps
import numpy as np
st.set_option('deprecation.showfileUploaderEncoding', False)
def import_and_predict(image_data, model):

        size = (180,180)
        image = ImageOps.fit(image_data, size, Image.ANTIALIAS)
        image = np.asarray(image)
        img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        #img_resize = (cv2.resize(img, dsize=(75, 75),    interpolation=cv2.INTER_CUBIC))/255.

        img_reshape = img[np.newaxis,...]

        prediction = model.predict(img_reshape)

        return prediction
if file is None:
    st.text("Please upload an image file")
else:
    image = Image.open(file)
    st.image(image, use_column_width=True)
    predictions = import_and_predict(image, model)
    score = tf.nn.softmax(predictions[0])
    st.write(prediction)
    st.write(score)
    print(
    "This image most likely belongs to {} with a {:.2f} percent confidence."
    .format(class_names[np.argmax(score)], 100 * np.max(score))
)
Overwriting app.py
In [29]:
!pip install ngrok
Requirement already satisfied: ngrok in /usr/local/lib/python3.10/dist-packages (1.2.0)
In [53]:
!ngrok config add-authtoken 2eqKeuz9c2zS8dU2m1KpMCzb22i_6ZpX7JoLiMWfwXwCYy8v3
Authtoken saved to configuration file: /root/.config/ngrok/ngrok.yml
In [14]:
!pip install pyngrok
Collecting pyngrok
  Downloading pyngrok-7.1.6-py3-none-any.whl (22 kB)
Requirement already satisfied: PyYAML>=5.1 in /usr/local/lib/python3.10/dist-packages (from pyngrok) (6.0.1)
Installing collected packages: pyngrok
Successfully installed pyngrok-7.1.6
In [45]:
from pyngrok import ngrok
In [54]:
!nohup streamlit run /content/app.py &
nohup: appending output to 'nohup.out'
In [65]:
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
--2024-04-09 01:58:03--  https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
Resolving bin.equinox.io (bin.equinox.io)... 54.237.133.81, 52.202.168.65, 18.205.222.128, ...
Connecting to bin.equinox.io (bin.equinox.io)|54.237.133.81|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13921656 (13M) [application/octet-stream]
Saving to: ‘ngrok-stable-linux-amd64.zip’

ngrok-stable-linux- 100%[===================>]  13.28M  23.5MB/s    in 0.6s    

2024-04-09 01:58:04 (23.5 MB/s) - ‘ngrok-stable-linux-amd64.zip’ saved [13921656/13921656]

Archive:  ngrok-stable-linux-amd64.zip
  inflating: ngrok                   
In [66]:
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
--2024-04-09 01:58:20--  https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
Resolving bin.equinox.io (bin.equinox.io)... 54.237.133.81, 52.202.168.65, 18.205.222.128, ...
Connecting to bin.equinox.io (bin.equinox.io)|54.237.133.81|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 13921656 (13M) [application/octet-stream]
Saving to: ‘ngrok-stable-linux-amd64.zip.1’

ngrok-stable-linux- 100%[===================>]  13.28M  21.7MB/s    in 0.6s    

2024-04-09 01:58:21 (21.7 MB/s) - ‘ngrok-stable-linux-amd64.zip.1’ saved [13921656/13921656]

In [67]:
!mv ngrok /usr/local/bin/
In [68]:
!unzip ngrok-stable-linux-amd64.zip
Archive:  ngrok-stable-linux-amd64.zip
  inflating: ngrok                   
In [69]:
!ls
 app.py								 ngrok-stable-linux-amd64.zip
'EfficientNet_model_unfreezing_layers_Tanmay_Yuvraj (1).ipynb'	 ngrok-stable-linux-amd64.zip.1
 efficientnetv2-l_notop.h5					 nohup.out
 ngrok								 sample_data
In [70]:
!chmod +x ngrok
In [71]:
!pip install efficientnet
Collecting efficientnet
  Downloading efficientnet-1.1.1-py3-none-any.whl (18 kB)
Collecting keras-applications<=1.0.8,>=1.0.7 (from efficientnet)
  Downloading Keras_Applications-1.0.8-py3-none-any.whl (50 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.7/50.7 kB 1.9 MB/s eta 0:00:00
Requirement already satisfied: scikit-image in /usr/local/lib/python3.10/dist-packages (from efficientnet) (0.19.3)
Requirement already satisfied: numpy>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from keras-applications<=1.0.8,>=1.0.7->efficientnet) (1.25.2)
Requirement already satisfied: h5py in /usr/local/lib/python3.10/dist-packages (from keras-applications<=1.0.8,>=1.0.7->efficientnet) (3.9.0)
Requirement already satisfied: scipy>=1.4.1 in /usr/local/lib/python3.10/dist-packages (from scikit-image->efficientnet) (1.11.4)
Requirement already satisfied: networkx>=2.2 in /usr/local/lib/python3.10/dist-packages (from scikit-image->efficientnet) (3.2.1)
Requirement already satisfied: pillow!=7.1.0,!=7.1.1,!=8.3.0,>=6.1.0 in /usr/local/lib/python3.10/dist-packages (from scikit-image->efficientnet) (9.4.0)
Requirement already satisfied: imageio>=2.4.1 in /usr/local/lib/python3.10/dist-packages (from scikit-image->efficientnet) (2.31.6)
Requirement already satisfied: tifffile>=2019.7.26 in /usr/local/lib/python3.10/dist-packages (from scikit-image->efficientnet) (2024.2.12)
Requirement already satisfied: PyWavelets>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from scikit-image->efficientnet) (1.6.0)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from scikit-image->efficientnet) (24.0)
Installing collected packages: keras-applications, efficientnet
Successfully installed efficientnet-1.1.1 keras-applications-1.0.8
In [72]:
!ngrok config add-authtoken 2eqKeuz9c2zS8dU2m1KpMCzb22i_6ZpX7JoLiMWfwXwCYy8v3
NAME:
   ngrok - tunnel local ports to public URLs and inspect traffic

DESCRIPTION:
    ngrok exposes local networked services behinds NATs and firewalls to the
    public internet over a secure tunnel. Share local websites, build/test
    webhook consumers and self-host personal services.
    Detailed help for each command is available with 'ngrok help <command>'.
    Open http://localhost:4040 for ngrok's web interface to inspect traffic.

EXAMPLES:
    ngrok http 80                    # secure public URL for port 80 web server
    ngrok http -subdomain=baz 8080   # port 8080 available at baz.ngrok.io
    ngrok http foo.dev:80            # tunnel to host:port instead of localhost
    ngrok http https://localhost     # expose a local https server
    ngrok tcp 22                     # tunnel arbitrary TCP traffic to port 22
    ngrok tls -hostname=foo.com 443  # TLS traffic for foo.com to port 443
    ngrok start foo bar baz          # start tunnels from the configuration file

VERSION:
   2.3.41

AUTHOR:
  inconshreveable - <alan@ngrok.com>

COMMANDS:
   authtoken	save authtoken to configuration file
   credits	prints author and licensing information
   http		start an HTTP tunnel
   start	start tunnels by name from the configuration file
   tcp		start a TCP tunnel
   tls		start a TLS tunnel
   update	update ngrok to the latest version
   version	print the version string
   help		Shows a list of commands or help for one command

ERROR:  Unrecognized command: config
In [76]:
!nohup streamlit run app.py &
nohup: appending output to 'nohup.out'

In [77]:
from pyngrok import ngrok

# Set the port number of your local server
port = 8501

# Start ngrok tunnel
ngrok_tunnel = ngrok.connect(port)
print("Public URL:", ngrok_tunnel.public_url)

# Keep the Colab session alive


input("Press Enter to stop the ngrok tunnel...")
ngrok.disconnect(ngrok_tunnel.public_url)
Public URL: https://c2e7-35-185-24-138.ngrok-free.app
Press Enter to stop the ngrok tunnel...
WARNING:pyngrok.process.ngrok:t=2024-04-09T02:07:53+0000 lvl=warn msg="Stopping forwarder" name=http-8501-442dc45e-e713-4130-90b7-1ff2dc5d6980 acceptErr="failed to accept connection: Listener closed"
In [ ]: